8185 test refactor 2 #8405
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: dev
Conversation
Hi @garciadias, it looks much better now compared to the current state of these tests. I have made a few comments, mostly code suggestions you can accept and then double-check. In general it's good, but there are a few places where fewer lines of code could be used or things adjusted for clarity. Thanks!
```python
TEST_CASES_CAB = [
    [
        {
            "spatial_dims": params["spatial_dims"],
            "dim": params["dim"],
            "num_heads": params["num_heads"],
            "bias": params["bias"],
            "flash_attention": False,
        },
        (2, params["dim"], *([16] * params["spatial_dims"])),
        (2, params["dim"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(spatial_dims=[2, 3], dim=[32, 64, 128], num_heads=[2, 4, 8], bias=[True, False])
]
```
Suggested change:

```python
TEST_CASES_CAB = [
    [
        {**params, "flash_attention": False},
        (2, params["dim"], *([16] * params["spatial_dims"])),
        (2, params["dim"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(spatial_dims=[2, 3], dim=[32, 64, 128], num_heads=[2, 4, 8], bias=[True, False])
]
```
We can reduce this a little more with Python's `**` unpacking syntax.
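For context, a minimal sketch of the idea behind `dict_product` (hypothetical; MONAI's actual `tests.test_utils.dict_product` may differ in detail) shows why the `{**params, ...}` merge works:

```python
from itertools import product


def dict_product(**value_lists):
    """Yield one dict per combination of the keyword-argument value lists (sketch)."""
    keys = list(value_lists)
    for combo in product(*value_lists.values()):
        yield dict(zip(keys, combo))


# {**params, ...} copies every generated key, then adds (or overrides) fixed ones:
cases = [{**params, "flash_attention": False} for params in dict_product(dim=[32, 64], bias=[True, False])]
print(len(cases))  # 4 combinations
print(cases[0])    # {'dim': 32, 'bias': True, 'flash_attention': False}
```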
```python
TEST_CASE_CABLOCK = [
    [
        {
            "hidden_size": params["hidden_size"],
            "num_heads": params["num_heads"],
            "dropout_rate": params["dropout_rate"],
            "rel_pos_embedding": params["rel_pos_embedding_val"] if not params["flash_attn"] else None,
            "input_size": params["input_size"],
            "use_flash_attention": params["flash_attn"],
        },
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 6, 8, 12],
        rel_pos_embedding_val=[None, RelPosEmbedding.DECOMPOSED],
        input_size=[(16, 32), (8, 8, 8)],
        flash_attn=[True, False],
    )
]
```
Suggested change (naming the `dict_product` key `rel_pos_embedding` directly, so the override in the dict display replaces the splatted value and the parameter names match):

```python
TEST_CASE_CABLOCK = [
    [
        {**params, "rel_pos_embedding": params["rel_pos_embedding"] if not params["use_flash_attention"] else None},
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 6, 8, 12],
        rel_pos_embedding=[None, RelPosEmbedding.DECOMPOSED],
        input_size=[(16, 32), (8, 8, 8)],
        use_flash_attention=[True, False],
    )
]
```
Also this should work.
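A detail the `{**params, ...}` rewrites rely on: in a dict display, a key written after `**params` overrides the splatted value, so a derived entry such as `rel_pos_embedding` can be recomputed even though `**params` already supplies one. A toy illustration with made-up values:

```python
params = {"hidden_size": 360, "use_flash_attention": True, "rel_pos_embedding": "decomposed"}

# the explicit key written after **params wins over the splatted one
case = {**params, "rel_pos_embedding": params["rel_pos_embedding"] if not params["use_flash_attention"] else None}
print(case["rel_pos_embedding"])  # None
print(case["hidden_size"])  # 360
```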
```python
for params in dict_product(
    spatial_dims=range(2, 4),
    kernel_size=[1, 3],
    stride=[1, 2],
    norm_name=["batch", "instance"],
    in_size=[15, 16],
    trans_bias=[True, False],
):
    spatial_dims = params["spatial_dims"]
    kernel_size = params["kernel_size"]
    stride = params["stride"]
    norm_name = params["norm_name"]
    in_size = params["in_size"]
    trans_bias = params["trans_bias"]

    out_size = in_size * stride
    test_case = [
        {
            "spatial_dims": spatial_dims,
            "in_channels": in_channels,
            "out_channels": out_channels,
            "kernel_size": kernel_size,
            "norm_name": norm_name,
            "stride": stride,
            "upsample_kernel_size": stride,
            "trans_bias": trans_bias,
        },
        (1, in_channels, *([in_size] * spatial_dims)),
        (1, out_channels, *([out_size] * spatial_dims)),
        (1, out_channels, *([in_size * stride] * spatial_dims)),
    ]
    TEST_UP_BLOCK.append(test_case)
```
Suggested change:

```python
param_dicts = dict_product(
    spatial_dims=range(2, 4),
    kernel_size=[1, 3],
    upsample_kernel_size=[1, 2],
    norm_name=["batch", "instance"],
    in_size=[15, 16],
    trans_bias=[True, False],
)

for params in param_dicts:
    spatial_dims = params["spatial_dims"]
    stride = params["upsample_kernel_size"]
    in_size = params.pop("in_size")  # don't want in_size in the dictionary below
    out_size = in_size * stride
    test_case = [
        {**params, "in_channels": in_channels, "out_channels": out_channels},
        (1, in_channels, *([in_size] * spatial_dims)),
        (1, out_channels, *([out_size] * spatial_dims)),
        (1, out_channels, *([in_size * stride] * spatial_dims)),
    ]
    TEST_UP_BLOCK.append(test_case)
```
I think this is equivalent and a little smaller.
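The `params.pop(...)` step in suggestions like this relies on `pop` both returning the value and removing the key, so the later `{**params, ...}` splat no longer carries it into the constructor kwargs. A small illustration with made-up keys:

```python
params = {"spatial_dims": 2, "upsample_kernel_size": 2, "in_size": 15}

# pop returns the value and deletes the key in one step
in_size = params.pop("in_size")

# the splat now contains only the keys the constructor should see
block_kwargs = {**params, "in_channels": 4, "out_channels": 8}
print(in_size)       # 15
print(block_kwargs)  # {'spatial_dims': 2, 'upsample_kernel_size': 2, 'in_channels': 4, 'out_channels': 8}
```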
```python
from tests.test_utils import dict_product

TEST_CASE_RESBLOCK = [
    [
        {
            "spatial_dims": params["spatial_dims"],
            "in_channels": params["in_channels"],
            "kernel_size": params["kernel_size"],
            "norm": params["norm"],
        },
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(
        spatial_dims=range(2, 4),
        in_channels=range(1, 4),
        kernel_size=[1, 3],
        norm=[("group", {"num_groups": 1}), "batch", "instance"],
    )
]
```
Suggested change:

```python
from tests.test_utils import dict_product

TEST_CASE_RESBLOCK = [
    [
        params,
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
        (2, params["in_channels"], *([16] * params["spatial_dims"])),
    ]
    for params in dict_product(
        spatial_dims=range(2, 4),
        in_channels=range(1, 4),
        kernel_size=[1, 3],
        norm=[("group", {"num_groups": 1}), "batch", "instance"],
    )
]
```
```python
TEST_CASE_TRANSFORMERBLOCK = [
    [
        {
            "hidden_size": params["hidden_size"],
            "num_heads": params["num_heads"],
            "mlp_dim": params["mlp_dim"],
            "dropout_rate": params["dropout_rate"],
            "with_cross_attention": params["with_cross_attention"],
        },
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 8, 12],
        mlp_dim=[1024, 3072],
        with_cross_attention=[False, True],
    )
]
```
Suggested change:

```python
TEST_CASE_TRANSFORMERBLOCK = [
    [
        params,
        (2, 512, params["hidden_size"]),
        (2, 512, params["hidden_size"]),
    ]
    for params in dict_product(
        dropout_rate=np.linspace(0, 1, 4),
        hidden_size=[360, 480, 600, 768],
        num_heads=[4, 8, 12],
        mlp_dim=[1024, 3072],
        with_cross_attention=[False, True],
    )
]
```
```python
TEST_CASE_UNETR_BASIC_BLOCK = [
    [
        {
            "spatial_dims": params["spatial_dims"],
            "in_channels": 16,
            "out_channels": 16,
            "kernel_size": params["kernel_size"],
            "norm_name": params["norm_name"],
            "stride": params["stride"],
        },
        (1, 16, *([params["in_size"]] * params["spatial_dims"])),
        (1, 16, *([_get_out_size(params)] * params["spatial_dims"])),
    ]
    for params in dict_product(
        spatial_dims=range(1, 4),
        kernel_size=[1, 3],
        stride=[2],
        norm_name=[("GROUP", {"num_groups": 16}), ("batch", {"track_running_stats": False}), "instance"],
        in_size=[15, 16],
    )
]
```
Suggested change (note the expected shape must be computed before `in_size` is popped, since `_get_out_size` reads it from `params`):

```python
norm_names = [("GROUP", {"num_groups": 16}), ("batch", {"track_running_stats": False}), "instance"]
param_dicts = dict_product(
    spatial_dims=range(1, 4), kernel_size=[1, 3], stride=[2], norm_name=norm_names, in_size=[15, 16]
)
TEST_CASE_UNETR_BASIC_BLOCK = []
for params in param_dicts:
    expected_shape = (1, 16, *([_get_out_size(params)] * params["spatial_dims"]))
    in_size = params.pop("in_size")  # don't want in_size in the dictionary below
    input_param = {**params, "in_channels": 16, "out_channels": 16}
    input_shape = (1, 16, *([in_size] * params["spatial_dims"]))
    TEST_CASE_UNETR_BASIC_BLOCK.append([input_param, input_shape, expected_shape])
```
Breaking up what you're doing into pieces in different ways can reduce the number of lines used and, I think, improves readability. The multi-line for expression is often fine for readability, but creating a `norm_names` variable and moving definitions around reduces the line count, and explicit variable names for the components of the test are clearer.
```python
TESTS = [
    (params["keepdim"], params["p"], params["update_meta"], params["list_output"])
    for params in dict_product(
        p=TEST_NDARRAYS, keepdim=[True, False], update_meta=[True, False], list_output=[True, False]
    )
]
```
Suggested change:

```python
TESTS = list(dict_product(keepdim=[True, False], p=TEST_NDARRAYS, update_meta=[True, False], list_output=[True, False]))
```
You might not even need `list(...)` here.
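Whether `list(...)` is needed depends on what `dict_product` returns. If it were implemented lazily over `itertools.product` (a plausible but unconfirmed assumption about the helper), the result would be a one-shot generator, which matters whenever the test list is iterated more than once:

```python
from itertools import product


def dict_product(**value_lists):
    # hypothetical lazy implementation; the real helper may already return a list
    return (dict(zip(value_lists, combo)) for combo in product(*value_lists.values()))


lazy = dict_product(keepdim=[True, False], list_output=[True, False])
first_pass = list(lazy)
second_pass = list(lazy)  # generator is already exhausted
print(len(first_pass), len(second_pass))  # 4 0
```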
```python
TESTS.extend(
    [
        [
            np.arange(4).reshape((1, 2, 2)) + 1.0,  # data
            *params["device"],
            dst,
            {
                "dst_keys": "dst_affine",
                "dtype": params["dtype"],
                "align_corners": params["align"],
                "mode": params["interp_mode"],
                "padding_mode": "zeros",
            },
            expct,
        ]
        for params in dict_product(
            device=TEST_DEVICES,
            align=[False, True],
            dtype=[torch.float32, torch.float64],
            interp_mode=["nearest", "bilinear"],
        )
    ]
)
```
Suggested change:

```python
TESTS += [
    [
        np.arange(4).reshape((1, 2, 2)) + 1.0,  # data
        *params.pop("device"),
        dst,
        {**params, "dst_keys": "dst_affine", "padding_mode": "zeros"},
        expct,
    ]
    for params in dict_product(
        device=TEST_DEVICES,
        align=[False, True],
        dtype=[torch.float32, torch.float64],
        interp_mode=["nearest", "bilinear"],
    )
]
```
I think this works?
```python
        upsample_mode=list(UpsampleMode),
        vae_estimate_std=[True, False],
    )
]
```
Here again the number of lines can be reduced by defining dictionaries with the `{**params, "something_else": 1, ...}` pattern.
```python
    **({"spatial_dims": 2} if params["nd"] == 2 else {}),
    **({"post_activation": False} if params["nd"] == 2 and params["classification"] else {}),
```
I feel this usage of `**` with a bracketed expression isn't so readable, so this whole statement may be better implemented with a for loop so you can construct the dictionary with actual `if` statements. You can also use the `{**params, ...}` pattern here to reduce the line count a lot.
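To make the comparison concrete, here is the terse conditional-splat form next to an explicit-loop equivalent, using made-up `params` values:

```python
params = {"nd": 2, "classification": True}

# terse: conditionally splat extra keys inline
case_a = {
    **params,
    **({"spatial_dims": 2} if params["nd"] == 2 else {}),
    **({"post_activation": False} if params["nd"] == 2 and params["classification"] else {}),
}

# explicit: build the dict with ordinary if statements
case_b = dict(params)
if params["nd"] == 2:
    case_b["spatial_dims"] = 2
    if params["classification"]:
        case_b["post_activation"] = False

print(case_a == case_b)  # True
```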
Fixes #8185
Description
This PR solves items 2 and 3 on #8185 for a few test folders.
I would merge these and proceed with the same type of change in other files if @ericspod approves.
I would like to keep these PRs small, so even though they follow the same pattern of changes, merging them bit by bit keeps them manageable.
Types of changes
- `./runtests.sh -f -u --net --coverage`
- `./runtests.sh --quick --unittests --disttests`
- `make html` command in the `docs/` folder.